N-1-2-030.50.2 Gigabits by Craig Partridge*, craig@aland.bbn.com

The topic of this quarter's column is interfacing gigabit computers
and gigabit networks. It is a problem that has to be solved soon and
is proving surprisingly difficult.

Let's start with the computers. I have recently received the
preliminary handbook for the DEC "Alpha" chipset. Alpha is a RISC
chip design that DEC hopes to use well into the 21st century. The
first version has a 6 nanosecond clock and can do two instructions in
parallel per cycle - so at peak rates (for a few instructions) the CPU
will be running at about 300 million instructions per second. The
current chip has a 128 bit bus with a 10 ns cycle, so at peak speeds,
the processor is consuming 12.8 gigabits/second from outside the
processor. (Internally, using on chip cache memory, the processor
will actually be processing somewhat more data.) While looping in
code and smart management of caches will limit how much data has to come
from peripheral devices, much of the data has got to come from some
peripheral, and some of the data is presumably going to come from the
network the computer is attached to. Another way to think about the
computer's data requirements is to remember the old rule of thumb
attributed to Amdahl: one instruction requires a bit of I/O. That
rule implies the first Alpha processor will need between 150 and 300
megabits per second of I/O capacity. Parallel disk systems can provide some of
that bandwidth, but the network is going to have to provide a lot too.
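
As a back-of-the-envelope check on the arithmetic above, the short C
program below recomputes the numbers; the 6 ns CPU cycle, dual issue,
128-bit bus and 10 ns bus cycle are the figures quoted from the
preliminary Alpha handbook, and the one-bit-of-I/O-per-instruction
rule is Amdahl's.

    #include <stdio.h>

    int main(void)
    {
        double cpu_cycle_ns   = 6.0;    /* Alpha CPU cycle time       */
        double issues_per_cyc = 2.0;    /* two instructions per cycle */
        double bus_bits       = 128.0;  /* external bus width         */
        double bus_cycle_ns   = 10.0;   /* bus cycle time             */

        double peak_mips = issues_per_cyc / cpu_cycle_ns * 1000.0; /* ~333 MIPS   */
        double bus_gbps  = bus_bits / bus_cycle_ns;                /* 12.8 Gbit/s */
        double io_mbps   = peak_mips;  /* Amdahl: one bit of I/O per instruction */

        printf("peak ~%.0f MIPS, bus %.1f Gbit/s, I/O ~%.0f Mbit/s\n",
               peak_mips, bus_gbps, io_mbps);
        return 0;
    }
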
Attaching computers which process gigabits of data to networks which
have gigabits of bandwidth creates a computing milieu I like to refer
to as the "gigabit environment". The term gigabit environment helps
me remember that both computer and network are capable of and
interested in handling data at gigabit rates.

Several researchers have already started work on interfacing gigabit
networks and fast computers. Some of those projects, most notably
those at Bellcore, the University of Pennsylvania, MIT and DEC Western
Research Lab, have been looking at interfacing gigabit networks to
workstations. I'd like to introduce you to some of the problems these
teams are discovering because chips like the DEC Alpha and the faster
SPARC chips coming out this year will eventually appear in
workstations and those workstations will appear on our desktops. Thus
the gigabit network to workstation connection has the potential to be
a key problem in the late 1990s.

One problem is that context switching in RISC processors tends to be
relatively expensive, measured in hundreds of instructions. So if a
packet arrives for an application which is not currently running, it
may take hundreds of instruction cycles to get the process running so
it can accept the packet. That's a lot of instructions, particularly
when it may take only a few hundred instructions to do all the
protocol processing on the packet. Jeff Mogul of DEC has suggested
that to avoid this problem, it may make sense in some cases to have
applications busy wait for packets.
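
To make that suggestion concrete, here is a rough sketch in C of what
busy waiting looks like from the application's side. The names
poll_interface() and read_packet() are invented stand-ins for whatever
the real host interface provides; the point is only that the
application spins on the interface instead of blocking and paying for
a context switch when the packet finally arrives.

    struct packet { unsigned char data[1500]; int len; };

    extern int  poll_interface(void);          /* hypothetical: nonzero if a packet is waiting   */
    extern void read_packet(struct packet *);  /* hypothetical: pull the packet off the interface */

    void busy_wait_receive(struct packet *p)
    {
        for (;;) {
            if (poll_interface()) {   /* cheap check, no context switch */
                read_packet(p);
                return;
            }
            /* Keep spinning: the wasted cycles here can be cheaper than
               the hundreds of cycles a context switch would cost.      */
        }
    }
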
Another problem is memory hierarchies. In part because fast memory is
very expensive, many workstations have several layers of memory
hierarchies. (Indeed, these memory hierarchies are one reason that
context switching is so expensive.) Getting data to or from a network
interface requires moving data between the bottom of the memory
hierarchy (the interface) and the top of the memory hierarchy (the
processor) and moving that data can take a long time. David
Tennenhouse of MIT has suggested that to avoid these memory-related
delays, it may make sense to make network interfaces which act like
co-processors, and attach the network interface to the co-processor
pins on the CPU chip! Current trends are to get rid of co-processor
pins because pin space is tight, but Tennenhouse's observation that
we need to look at fast paths from network to CPU is important.
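
As a very rough illustration of why that data movement hurts, the
sketch below annotates a conventional receive path with the memory
traffic each stage generates; the function names are invented for the
example, not any particular system's interface.

    struct packet { unsigned char data[1500]; int len; };

    extern void dma_to_kernel_buffer(void);            /* interface -> main memory   */
    extern void copy_to_user_buffer(struct packet *);  /* main memory -> main memory */
    extern void process_packet(struct packet *);       /* main memory -> cache/CPU   */

    void conventional_receive(struct packet *p)
    {
        dma_to_kernel_buffer();   /* data crosses the memory system once, into a kernel buffer */
        copy_to_user_buffer(p);   /* crosses it twice more: read out, written back             */
        process_packet(p);        /* and once again when the processor finally loads it        */
    }
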
Finally, folks are running into unnecessary bottlenecks in workstation
designs. For example, the University of Pennsylvania team reports
that while the IBM RS/6000 has a fast bus and a fast processor, the
I/O controller at the head of the bus is quite slow, making it
difficult to use the full power of Pennsylvania's prototype high-speed
network interface.

For more on this work, I suggest you look at Mogul and Borg's paper in
Proc. ACM ASPLOS, and the papers on ATM host interfaces in Proc. ACM
SIGCOMM '91.

* Research Scientist, BBN, and Editor-in-Chief, IEEE Network Magazine